Document-Level Relation Extraction with Reconstruction
In document-level relation extraction (DocRE), graph structure is generally
used to encode relation information in the input document to classify the
relation category between each entity pair, and has greatly advanced the DocRE
task over the past several years. However, the learned graph representation
universally models relation information between all entity pairs, regardless of
whether a relationship actually exists between them. Thus, entity pairs without
relationships disperse the attention that the encoder-classifier DocRE model
should pay to pairs with relationships, which may further hinder the
improvement of DocRE. To alleviate this issue, we propose a novel
encoder-classifier-reconstructor model for DocRE. The reconstructor manages to
reconstruct the ground-truth path dependencies from the graph representation,
to ensure that the proposed DocRE model pays more attention to encoding entity
pairs with relationships during training. Furthermore, the reconstructor is
regarded as a relationship indicator to assist relation classification during
inference, which can further improve the performance of the DocRE model.
Experimental results on a large-scale DocRE dataset show that the proposed
model can significantly improve the accuracy of relation extraction on a strong
heterogeneous graph-based baseline.
Comment: 9 pages, 5 figures, 6 tables. Accepted by AAAI 2021 (Long Paper).
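The reconstructor described above can be viewed as an auxiliary objective added to the usual relation-classification loss. The sketch below is a minimal, hypothetical illustration of that idea (the function names, the toy probabilities, and the weighting factor `lam` are assumptions, not details from the paper): the total training loss combines the cross-entropy over relation categories with the negative log-likelihood of the ground-truth dependency path.

```python
import math

def cross_entropy(probs, gold):
    # Negative log-likelihood of the gold relation label
    # given a probability distribution over relation categories.
    return -math.log(probs[gold])

def joint_loss(rel_probs, gold_rel, path_probs, lam=1.0):
    # Classification loss for one entity pair.
    l_cls = cross_entropy(rel_probs, gold_rel)
    # Reconstruction loss: negative log-likelihood of the
    # ground-truth path, i.e. the sum of -log step probabilities
    # along the path through the document graph.
    l_rec = -sum(math.log(p) for p in path_probs)
    # Weighted joint objective; lam trades off the two terms.
    return l_cls + lam * l_rec
```

Because the reconstruction term is only low when the model assigns high probability to paths that actually connect related entities, minimizing the joint loss pushes the encoder to focus on entity pairs that have relationships.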
Context Consistency between Training and Testing in Simultaneous Machine Translation
Simultaneous Machine Translation (SiMT) aims to yield a real-time partial
translation with a monotonically growing source-side context. However, there is
a counterintuitive phenomenon regarding context usage between training and
testing: e.g., a model tested with wait-k performs much worse when consistently
trained with wait-k than when inconsistently trained with wait-k' (where k' is
not equal to k) in terms of translation quality. To this end, we first
investigate the
underlying reasons behind this phenomenon and uncover the following two
factors: 1) the limited correlation between translation quality and training
(cross-entropy) loss; 2) exposure bias between training and testing. Based on
both reasons, we then propose an effective training approach called context
consistency training, which makes the context usage consistent between training
and testing by optimizing translation quality and latency as bi-objectives and
exposing the model to its own predictions during training.
The experiments on three language pairs confirm our intuition: with the help of
our context consistency training approach, our system encouraging context
consistency outperforms existing systems with context inconsistency for the
first time.
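The wait-k policy referenced above first reads k source tokens, then alternates between writing one target token and reading one more source token until the source is exhausted. A minimal sketch of the resulting read schedule (the function name and arguments are illustrative, not from the paper):

```python
def wait_k_schedule(src_len, tgt_len, k):
    # For each target position t (1-indexed), the number of source
    # tokens visible under wait-k is k + t - 1, capped at the full
    # source length once the source has been consumed.
    return [min(k + t - 1, src_len) for t in range(1, tgt_len + 1)]
```

For example, with a 5-token source, a 4-token target, and k=3, the model sees 3, 4, 5, and 5 source tokens when emitting the four target tokens; testing with a k different from the training k changes this schedule, which is the training/testing context inconsistency the abstract describes.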
Bridge Lessons Learned from the Wenchuan, China, Earthquake
A strong earthquake of M7.9 occurred in Wenchuan County in Sichuan Province, China, on May 12, 2008. This paper presents field observations on various types of bridge damage, including unseating of girders, longitudinal and transverse offset of decks, pounding at expansion joints, shear key failure, bearing displacement, column shear failure, and flexural cracks. Plausible causes of the damage and collapses are discussed, and the lessons learned from this event are briefly summarized. Some of the post-earthquake temporary constructions are also reported.